Meta-Query-Net: Resolving Purity-Informativeness Dilemma in Open-set Active Learning (Supplementary Material)

A Complete Proof of Theorem 4.1
We prove Theorem 4.1 by mathematical induction. The base step establishes that the first layer's output satisfies the skyline constraint; the inductive step shows that if the k-th layer's output satisfies the constraint, then so does the (k+1)-th layer's. Consider each dimension's scalar output of a layer: by the composition rule for non-decreasing functions, applying any non-decreasing function does not change the order of its inputs. By mathematical induction, where Lemmas A.1 and A.2 constitute the base step and Lemma A.3 constitutes the inductive step, any non-negative-weighted MLP satisfies the skyline constraint.

We train ResNet-18 using SGD with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 64. In the open-set AL setup, the number of IN examples available for training differs depending on the query strategy. We train MQ-Net for 100 epochs using SGD with a weight decay of 0.0005 and a mini-batch size of 64. Figure 5 shows the test accuracy of the target model throughout the AL rounds on the three cross-datasets.
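To make the conclusion of the induction concrete, the following is a minimal sketch (not the paper's implementation) of a non-negative-weighted MLP: raw parameters are passed through softplus so that every effective weight is non-negative, and the activation (ReLU) is non-decreasing. The layer sizes and the softplus reparameterization are illustrative assumptions; the sketch only demonstrates the monotonicity (skyline) property the theorem guarantees.

```python
import numpy as np

def softplus(z):
    # Numerically stable softplus log(1 + e^z); maps raw parameters
    # to strictly non-negative effective weights.
    return np.logaddexp(0.0, z)

class NonNegativeMLP:
    """Two-layer MLP whose effective weights are forced non-negative.

    With non-negative weights and a non-decreasing activation (ReLU),
    each output is a non-decreasing function of every input coordinate,
    which is the skyline constraint established by the induction.
    Layer sizes here are illustrative assumptions.
    """

    def __init__(self, in_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W1_raw = rng.normal(size=(hidden_dim, in_dim))
        self.W2_raw = rng.normal(size=(1, hidden_dim))

    def __call__(self, x):
        # Non-negative weights preserve input ordering; ReLU is non-decreasing.
        h = np.maximum(softplus(self.W1_raw) @ x, 0.0)
        return (softplus(self.W2_raw) @ h).item()

# Monotonicity check: if one input vector dominates another coordinate-wise
# (e.g., higher purity with equal informativeness), its score cannot be lower.
mlp = NonNegativeMLP(in_dim=2, hidden_dim=8)
lo = mlp(np.array([0.2, 0.5]))
hi = mlp(np.array([0.6, 0.5]))  # dominates the first input coordinate-wise
assert hi >= lo
```

The same dominance check holds for any pair of coordinate-wise ordered inputs, regardless of the random seed, because non-negativity of the weights is enforced structurally rather than by training.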
Meta-Query-Net: Resolving Purity-Informativeness Dilemma in Open-set Active Learning
Unlabeled examples awaiting annotation inevitably contain open-set noise. A few active learning studies have attempted to handle this open-set noise during sample selection by filtering out the noisy examples. However, because focusing only on the purity of the examples in a query set overlooks their informativeness, how best to balance purity and informativeness remains an important question. In this paper, to resolve this purity-informativeness dilemma in open-set active learning, we propose a novel Meta-Query-Net (MQ-Net) that adaptively finds the best balance between the two factors. Specifically, by leveraging the multi-round property of active learning, we train MQ-Net using a query set without an additional validation set. Furthermore, MQ-Net effectively captures a clear dominance relationship between unlabeled examples through a novel skyline regularization.